Bootstrap Tail Thickness Estimation for Symmetric Alpha-Stable Random Variables

Author

  • Brandon Franzke

Abstract

A new theorem shows that a bootstrap algorithm can estimate the impulsivity or tail thickness of symmetric α-stable (SαS) signals. SαS bell curves include the Gaussian bell curve as a special but non-impulsive case. Signals grow more impulsive as the bell curve's tail thickness increases or as the tail-thickness parameter α falls from 2 toward 0. The thin-tailed Gaussian bell curve has α = 2. The algorithm computes a statistic from SαS samples and then matches the test statistic against a continuum of precomputed values to find the estimated tail thickness α̂. The theorem and a corollary show that the statistic is invertible because it is a continuous bijection. So the bootstrapped α̂ is a consistent estimator of α in general. Simulations show that α̂ is robust for signals with α ∈ [0.2, 2] and that the estimator error decreases as the number of samples increases.

1. Robust Estimation of Symmetric α-Stable Tail Thickness

We show that a bootstrap algorithm can estimate the impulsiveness of a sequence of symmetric α-stable (SαS) random samples [1], [2], [3], [4], [5], [6]. The algorithm estimates the tail-thickness parameter α by interpolating a sample statistic between precomputed values. Figure 6 and Table 1 show that the algorithm applies to SαS random variables with α ∈ [0.2, 2]. A theorem shows that each α corresponds to a unique value of a sample statistic τ(Xα). A corollary to the theorem shows that the map is a bijection. The algorithm estimates α by calculating τ(Xα) and then inverting the bijection (Figure 5). The estimator α̂ applies to all finite sequences of independent and identically distributed (i.i.d.) SαS random variables.

Symmetric α-stable random variables have thick power-law tails and generalize the Gaussian probability density function (pdf) [7], [8], [9], [10], [11], [12], [13], [14]. The tail-thickness parameter α lies in (0, 2] and controls the impulsivity of samples drawn from the random variable Xα.
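Such SαS samples are straightforward to simulate. As a minimal sketch (the text above does not specify a sampler, and the function name `sas_samples` is illustrative), the classical Chambers–Mallows–Stuck transform draws a symmetric, unit-dispersion (γ = 1) SαS variate from one uniform and one exponential random variable:

```python
import numpy as np

def sas_samples(alpha, n, rng):
    """Draw n symmetric alpha-stable samples with unit dispersion (gamma = 1)
    via the Chambers-Mallows-Stuck transform (symmetric case, beta = 0)."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, n)   # uniform phase angle
    w = rng.exponential(1.0, n)                 # unit-mean exponential
    return (np.sin(alpha * u) / np.cos(u) ** (1 / alpha)
            * (np.cos(u - alpha * u) / w) ** ((1 - alpha) / alpha))

rng = np.random.default_rng(0)
x = sas_samples(2.0, 100_000, rng)  # alpha = 2: Gaussian with variance 2
```

For α = 2 the transform reduces to a Gaussian with variance 2 (characteristic function exp(−t²) under the unit-dispersion convention), and for α = 1 it reduces to tan U, the standard Cauchy.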
Figure 1 shows the inverse relation between α and the thickness of the bell-curve tails. Figure 2 shows how α controls the impulsiveness of samples from the distribution. The Gaussian pdf takes α = 2. The thicker-tailed Cauchy pdf takes α = 1. The moments of an α-stable random variable are finite only up to order k for k < α. Only the Gaussian random variable has finite second- and higher-order moments. The usual central limit theorem states that a standardized sum of finite-variance random variables converges in distribution to the standard normal distribution Z ∼ N(0, 1) [15], [16]. The generalized central limit theorem states a similar result for infinite-variance α-stable random variables [17], [18]: a standardized sum of α-stable random variables converges in distribution to an α-stable random variable with the same α. It also shows that this holds only for α-stable random variables.

Brandon Franzke and Bart Kosko are with the Signal and Image Processing Institute, Department of Electrical Engineering, University of Southern California, Los Angeles, California 90089, USA (email: [email protected]).

Real noise tends to be impulsive. Natural sources of impulsive signals include condensed and soft matter physics [19], [20], [21], geophysics [22], meteorology [23], biology [24], economics [25], [26], [27], fractional kinetics [28], [29], and communications [14], [30], [31], [32], [33]. Many random models assume that the dispersion of a random variable equals its finite variance or its squared error from the population mean. Impulsive signals violate this finite-variance assumption in general. So these models may wrongly dismiss important "rare" events as outliers. Selection of a bell-curve signal model requires empirical tests to estimate the actual tail thickness. Finding the optimal SαS bell curve for a given symmetric signal pattern is an open research problem. The α-Stable Estimation Algorithm in Section 3 estimates α from a sequence of observed SαS random samples.
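The gap between thin Gaussian tails (α = 2) and thick Cauchy tails (α = 1) can be checked in closed form. A small numerical comparison with unit-scale standard distributions (an illustrative scale convention, not the unit-dispersion convention used in the theorem below):

```python
import math

def gauss_tail(t):
    """P(X > t) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(t / math.sqrt(2))

def cauchy_tail(t):
    """P(X > t) for a standard Cauchy: 1/2 - arctan(t)/pi."""
    return 0.5 - math.atan(t) / math.pi

for t in (2, 5, 10):
    print(t, gauss_tail(t), cauchy_tail(t))
```

The Cauchy tail decays like the power law 1/(πt), so five-scale events that a Gaussian model treats as roughly one-in-a-million outliers occur a few times per hundred Cauchy samples.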
The algorithm computes the estimator α̂ in two stages: (1) it constructs a bijective τ-map between a test statistic τ(Xα) and α and (2) it computes τ(Xα) for observed random samples and then uses the τ−1-map to find α̂. The α-Stable Estimation Map Theorem in Section 2 ensures that the τ-map is unique. A corollary shows that the τ-map is a bijection and thus has an inverse α̂ = τ−1(τ(Xα)). The τ-map is also continuous. Thus α̂ converges in probability to α and so is a consistent estimator of α because τ̂(Xα) is a consistent estimator of τ(Xα) [15]. Section 4 presents simulations to show that α̂ is a good estimator for α ∈ [0.2, 2].

2. The α-Stable Estimation Map Theorem

The α-Stable Estimation Algorithm computes an estimator α̂ of the tail-thickness parameter α. It computes a sample statistic τ(Xα) from a sequence of observed SαS samples and then estimates α through the τ−1-map that maps from τ(Xα) to α. The α-Stable Estimation Map Theorem guarantees that on average the τ-map generates distinct values τ(Xα) for two independent sequences of SαS i.i.d. random variables Xα1 and Xα2 if α1 ≠ α2. The α-Stable Estimation Algorithm uses a corollary to ensure that the inverse exists. The corollary thus allows the algorithm to estimate α through the τ−1-map.

The algorithm computes a test statistic for Xα that resembles a vector p-norm. The test statistic is finite because the pth sample moment of a finite sequence of such realizations is finite for any finite p > 0. Suppose Xα is a sequence of N i.i.d. SαS random variables Xα with pdf fα(x). Suppose the random variable has α ∈ (0, 2] and unit dispersion: γ = 1. Suppose further that N is finite. Then the sample maximum

H(xα) = max{|xα[k]| : 1 ≤ k ≤ N} < ∞  (1)

almost surely since limx→∞ fα(x) = 0. Define gp as the length-normalized sample p-norm for finite p > 0.

Fig. 1. Symmetric α-stable probability density functions. The figure shows SαS probability density functions for α = 2.0 (Gaussian), 1.8, 1.5, and 1.0 (Cauchy). The thickness of the bell-curve tails increases as α decreases. Thicker tails correspond to more impulsive samples. The Gaussian bell curve is the only SαS distribution with finite moments of order k ≥ 2. Choosing α for a given application is an empirical question.
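The two-stage structure (precompute the τ-map on a grid of α values, then invert it by interpolation for observed samples) can be sketched as follows. The paper's p-norm statistic is not fully reproduced in this excerpt, so the sketch substitutes the sample mean of log|x| as a stand-in statistic: it is finite for every α ∈ (0, 2] and monotone in α, which is all the inversion step needs. All function names are illustrative.

```python
import numpy as np

def sas_samples(alpha, n, rng):
    """Chambers-Mallows-Stuck draw of symmetric alpha-stable samples, gamma = 1."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, n)
    w = rng.exponential(1.0, n)
    return (np.sin(alpha * u) / np.cos(u) ** (1 / alpha)
            * (np.cos(u - alpha * u) / w) ** ((1 - alpha) / alpha))

def tau(x):
    """Stand-in test statistic: mean log-magnitude (finite for every alpha)."""
    return np.mean(np.log(np.abs(x)))

def build_tau_table(alphas, m, rng):
    """Stage 1: precompute the statistic on a grid of alpha values."""
    return np.array([tau(sas_samples(a, m, rng)) for a in alphas])

def estimate_alpha(x, alphas, table):
    """Stage 2: compute tau for the observed samples and invert the
    tau-map by linear interpolation between the precomputed values."""
    order = np.argsort(table)   # np.interp needs increasing abscissae
    return np.interp(tau(x), table[order], alphas[order])

rng = np.random.default_rng(1)
alphas = np.linspace(0.2, 2.0, 19)
table = build_tau_table(alphas, 100_000, rng)
x = sas_samples(1.5, 50_000, rng)         # observed samples with true alpha = 1.5
alpha_hat = estimate_alpha(x, alphas, table)
```

With 50,000 samples at α = 1.5 the interpolated estimate lands close to the true α; the paper's algorithm follows the same precompute-and-invert pattern with its bootstrapped τ(Xα) in place of the stand-in statistic.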

Similar Articles

  • Array Signal Processing with Alpha Stable Distributions
  • Statistical Topology Using the Nonparametric Density Estimation and Bootstrap Algorithm
  • A Survey on Simulating Stable Random Variables
  • Robust stochastic resonance for simple threshold neurons
  • SOVA decoding in Symmetric Alpha-Stable Noise


Publication date: 2011